Diversity index

A diversity index is a statistic that increases when the number of types into which a set of entities has been classified increases, and obtains its maximum value for a given number of types when all types are represented by the same number of entities. When diversity indices are used in ecology, the entities of interest are usually individual plants or animals, and the types of interest are species or other taxa. In demography, the entities of interest can be people, and the types of interest various demographic groups, and in information science, the entities can be characters and the types the different letters of the alphabet. The most commonly used diversity indices are simple transformations of the effective number of types (also known as 'true diversity'), but each diversity index can also be interpreted in its own right as a measure corresponding to some real phenomenon (but a different one for each diversity index).[1][2][3][4]

True diversity (the effective number of types)

True diversity, or the effective number of types, refers to the number of equally-abundant types needed for the average proportional abundance of the types to equal that observed in the dataset of interest (where all types may not be equally abundant). The true diversity in a dataset is calculated by first taking the weighted average of the proportional abundances of the types in the dataset, and then taking the inverse of this. The equation is:[1][3][4]

{}^q\!D={1 \over \sqrt[q-1]{{\sum_{i=1}^S p_i p_i^{q-1}}}}

The denominator equals the average proportional abundance of the types in the dataset, as calculated with the weighted generalized mean with exponent q - 1. In the equation, S is the total number of types in the dataset, and p_i is the proportional abundance of the ith type. The proportional abundances themselves are used as the weights. The value of q defines which kind of mean is used: q = 0 corresponds to the weighted harmonic mean, q = 1 to the weighted geometric mean and q = 2 to the weighted arithmetic mean. As q approaches infinity, the weighted generalized mean approaches the maximum p_i value.
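As a sketch, the effective number of types can be computed directly from this definition; the function below and the abundance vector p are illustrative examples, not part of the original formulation (the q = 1 case uses the geometric-mean limit, i.e. the exponential of the Shannon entropy):

```python
import math

def true_diversity(p, q):
    """Effective number of types ^qD for proportional abundances p.

    At q = 1 the weighted generalized mean becomes the weighted geometric
    mean, and ^1D is the exponential of the Shannon entropy.
    """
    if q == 1:
        return math.exp(-sum(pi * math.log(pi) for pi in p if pi > 0))
    basic_sum = sum(pi ** q for pi in p if pi > 0)
    return basic_sum ** (1 / (1 - q))

p = [0.5, 0.3, 0.2]                 # hypothetical proportional abundances
richness = true_diversity(p, 0)     # q = 0 simply counts the types
inv_simpson = true_diversity(p, 2)  # 1 / sum(p_i^2)
```

With equally abundant types the result equals S for every q; with the unequal abundances above, the values for q > 0 fall below the richness of 3.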

The equation is often written in the equivalent form:

{}^q\!D=\left ( {\sum_{i=1}^S p_i^q} \right )^{1/(1-q)}

The term inside the parentheses is called the basic sum. Some popular diversity indices correspond to the basic sum as calculated with different values of q.[2]

Richness

Richness S simply quantifies how many different types the dataset of interest contains. For example, species richness of a dataset is the number of different species in the corresponding species list. Richness is a simple measure, so it has been a popular diversity index in ecology, where abundance data are often not available for the datasets of interest. Because richness does not take the abundances of the types into account, it is not the same thing as diversity, which does take abundances into account. However, if true diversity is calculated with q = 0, the effective number of types (0D) equals the actual number of types (S).[2][4]
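In code, richness is just the count of distinct types observed; the species list below is an invented example:

```python
# Species richness: the number of distinct types in the dataset.
# The observation list is a hypothetical example.
observations = ["oak", "pine", "oak", "birch", "pine", "oak"]
S = len(set(observations))  # number of distinct species
```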

Shannon index

The Shannon index has been a popular diversity index in the ecological literature, where it is also known as Shannon's diversity index, the Shannon-Wiener index (sometimes misspelled Shannon-Weiner), the Shannon-Weaver index and the Shannon entropy. The measure was originally proposed by Claude Shannon to quantify the entropy (uncertainty or information content) in strings of text.[5] The idea is that the more different letters there are, and the more equal their proportional abundances in the string of interest, the more difficult it is to correctly predict which letter will be the next one in the string. The Shannon entropy quantifies the uncertainty (entropy or degree of surprise) associated with this prediction. It is calculated as follows:

 H' = -\sum_{i=1}^S p_i \log p_i

where p_i is the proportion of characters belonging to the ith type of letter in the string of interest. In ecology, p_i is often the proportion of individuals belonging to the ith species in the dataset of interest. Then the Shannon entropy quantifies the uncertainty in predicting the species identity of an individual that is taken at random from the dataset.

The base of the logarithm used when calculating the Shannon entropy can be chosen freely. Shannon himself discussed logarithm bases 2, 10 and e, and these have since become the most popular bases in applications that use the Shannon entropy. Each log base corresponds to a different measurement unit: binary digits (bits), decimal digits (decits) and natural digits (nats) for the bases 2, 10 and e, respectively. Comparing Shannon entropy values that were originally calculated with different log bases requires converting them to the same log base: a change from base a to base b is obtained by multiplying by log_b a.[5]
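A minimal sketch of the entropy calculation and the base conversion; the abundance vector is a hypothetical example:

```python
import math

def shannon_entropy(p, base=math.e):
    """Shannon entropy of proportional abundances p in the given log base."""
    return -sum(pi * math.log(pi, base) for pi in p if pi > 0)

p = [0.5, 0.25, 0.25]                # hypothetical proportional abundances
h_nats = shannon_entropy(p)          # base e: nats
h_bits = shannon_entropy(p, base=2)  # base 2: bits

# Changing from base a = e to base b = 2: multiply by log_b(a)
assert abs(h_nats * math.log(math.e, 2) - h_bits) < 1e-12
```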

It has been shown that the Shannon index is based on the weighted geometric mean of the proportional abundances of the types, and that it equals the logarithm of true diversity as calculated with q = 1:[3]

 H' = -\sum_{i=1}^S p_i \log p_i = -\sum_{i=1}^S \log p_i^{p_i}

This can also be written

 H' = -(\log p_1^{p_1} +\log p_2^{p_2} +\log p_3^{p_3} + \cdots + \log p_S^{p_S})

which equals

 H' = -\log p_1^{p_1}p_2^{p_2}p_3^{p_3} \cdots p_S^{p_S} = \log \left ( {1 \over p_1^{p_1}p_2^{p_2}p_3^{p_3} \cdots p_S^{p_S}} \right )

Since the sum of the p_i values equals unity by definition, the denominator equals the weighted geometric mean of the p_i values, with the p_i values themselves being used as the weights (exponents in the equation). The term within the parentheses hence equals true diversity 1D, and H' equals log(1D).[1]

When all types in the dataset of interest are equally common, all p_i values equal 1/S, and the Shannon index hence takes the value log(S). The more unequal the abundances of the types, the larger the weighted geometric mean of the p_i values, and the smaller the corresponding Shannon entropy. If practically all abundance is concentrated in one type, and the other types are very rare (even if there are many of them), the Shannon entropy approaches zero. When there is only one type in the dataset, the Shannon entropy exactly equals zero (there is no uncertainty in predicting the type of the next randomly chosen entity).
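These limiting cases are easy to verify numerically; the abundance vectors below are invented examples:

```python
import math

def shannon(p):
    """Shannon entropy (natural log) of proportional abundances p."""
    return -sum(pi * math.log(pi) for pi in p if pi > 0)

S = 4
# Equal abundances: the maximum value, log(S)
assert abs(shannon([1 / S] * S) - math.log(S)) < 1e-12

# Abundance concentrated in one type: entropy close to zero
assert shannon([0.997, 0.001, 0.001, 0.001]) < 0.05

# A single type: exactly zero
assert shannon([1.0]) == 0.0
```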

Simpson index

The Simpson index was introduced in 1949 by Edward H. Simpson to measure the degree of concentration when individuals are classified into types.[6] The same index was rediscovered by Orris C. Herfindahl in 1950.[7] The square root of the index had already been introduced in 1945 by the economist Albert O. Hirschman.[8] As a result, the same measure is usually known as the Simpson index in ecology, and as the Herfindahl index or the Herfindahl-Hirschman index (HHI) in economics.

The measure equals the probability that two entities taken at random from the dataset of interest represent the same type.[6] It equals:

 \lambda = \sum_{i=1}^S p_i^2

This is the weighted arithmetic mean of the proportional abundances p_i of the types of interest, with the proportional abundances themselves being used as the weights.[1] Proportional abundances are by definition constrained to values between zero and unity, but their weighted mean, and hence λ, can never be smaller than 1/S, which is reached when all types are equally abundant.

By comparing the equation used to calculate λ with the equations used to calculate true diversity, it can be seen that 1/λ equals 2D, i.e. true diversity as calculated with q = 2. The original Simpson's index hence equals the corresponding basic sum.[2]
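A minimal numerical check of this relationship, using a hypothetical abundance vector:

```python
p = [0.5, 0.3, 0.2]   # hypothetical proportional abundances

# lambda: probability that two entities drawn at random
# (with replacement) represent the same type
lam = sum(pi ** 2 for pi in p)

# 1/lambda equals true diversity of order 2 (the inverse Simpson index)
D2 = 1 / lam
```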

The interpretation of λ as the probability that two entities taken at random from the dataset of interest represent the same type assumes either that the dataset of interest is essentially infinite, or that the first entity is replaced to the dataset before taking the second entity. Sometimes researchers have preferred to assume sampling without replacement from a finite dataset, and then the probability of obtaining the same type with both random draws is:

 l = \frac{\sum_{i=1}^S n_i (n_i -1)}{N (N-1)}

where n_i is the number of entities belonging to the ith type and N is the total number of entities in the dataset.[6]
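The without-replacement version works from raw counts rather than proportions; the counts below are an invented example:

```python
# n_i: number of entities per type (hypothetical counts); N: total
counts = [5, 3, 2]
N = sum(counts)

# Probability that two draws *without* replacement give the same type
l = sum(n * (n - 1) for n in counts) / (N * (N - 1))
```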

Since mean proportional abundance of the types increases with decreasing number of types and increasing abundance of the most abundant type, λ obtains small values in datasets of high diversity and large values in datasets of low diversity. This is counterintuitive behavior for a diversity index, so often such transformations of λ that increase with increasing diversity have been used instead. The most popular of such indices have been the inverse Simpson index (1/λ) and the Gini-Simpson index (1 - λ).[1][2] Both of these have also been called the Simpson index in the ecological literature, so care is needed to avoid accidentally comparing the different indices as if they were the same.

Inverse Simpson index

The inverse Simpson index equals:

 1/ \lambda = {1 \over\sum_{i=1}^S p_i^2} = {}^2D

This simply equals true diversity of order 2, i.e. the effective number of types that is obtained when the weighted arithmetic mean is used to quantify average proportional abundance of types in the dataset of interest.

Gini-Simpson index

The original Simpson index λ equals the probability that two entities taken at random from the dataset of interest (with replacement) represent the same type. Its transformation 1 - λ therefore equals the probability that the two entities represent different types. This measure is also known in ecology as the Gini-Simpson index.[2] It can be expressed as a transformation of true diversity of order 2:

 1 - \lambda = 1 - \sum_{i=1}^S p_i^2 = 1 - 1/{}^2D
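As a sketch, both expressions can be evaluated on a hypothetical abundance vector to confirm they agree:

```python
p = [0.5, 0.3, 0.2]   # hypothetical proportional abundances
lam = sum(pi ** 2 for pi in p)

# Probability that two randomly drawn entities represent different types
gini_simpson = 1 - lam

# Equivalent expression via true diversity of order 2
D2 = 1 / lam
assert abs(gini_simpson - (1 - 1 / D2)) < 1e-12
```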

The Gibbs-Martin index of sociology, psychology and management studies,[9] which is also known as the Blau index, is the same measure as the Gini-Simpson index.

Berger-Parker index

The Berger-Parker index equals the maximum p_i value in the dataset, i.e. the proportional abundance of the most abundant type. This corresponds to the weighted generalized mean of the p_i values when q approaches infinity, and hence equals the inverse of true diversity of order infinity ({}^\infty\!D).
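A one-line sketch with a hypothetical abundance vector:

```python
p = [0.5, 0.3, 0.2]   # hypothetical proportional abundances

# Berger-Parker index: abundance of the most abundant type
bp = max(p)

# Its inverse is the true diversity of order infinity
D_inf = 1 / bp
```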

Rényi entropy

The Rényi entropy is a generalization of the Shannon entropy to other values of q than unity. It can be expressed:

{}^qH = \frac{1}{1-q} \; \log \sum_{i=1}^S p_i^q

which equals

{}^qH = \log {1 \over \sqrt[q-1]{{\sum_{i=1}^S p_i p_i^{q-1}}}} = \log({}^q\!D)

This means that taking the logarithm of true diversity based on any value of q gives the Rényi entropy corresponding to the same value of q.
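The identity can be checked numerically; the function (including its q = 1 limit branch) and the abundance vector are illustrative:

```python
import math

def renyi_entropy(p, q):
    """Renyi entropy of order q; at q = 1 it reduces to the Shannon entropy."""
    if q == 1:
        return -sum(pi * math.log(pi) for pi in p if pi > 0)
    return math.log(sum(pi ** q for pi in p if pi > 0)) / (1 - q)

p = [0.5, 0.3, 0.2]   # hypothetical proportional abundances

# ^2H equals the log of the inverse Simpson index (true diversity of order 2)
lam = sum(pi ** 2 for pi in p)
assert abs(renyi_entropy(p, 2) - math.log(1 / lam)) < 1e-12
```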

References

  1. ^ a b c d e Hill, M. O. (1973) Diversity and evenness: a unifying notation and its consequences. Ecology, 54, 427–432
  2. ^ a b c d e f Jost, L. (2006) Entropy and diversity. Oikos, 113, 363–375
  3. ^ a b c Tuomisto, H. (2010) A diversity of beta diversities: straightening up a concept gone awry. Part 1. Defining beta diversity as a function of alpha and gamma diversity. Ecography, 33, 2–22. doi: 10.1111/j.1600-0587.2009.05880.x
  4. ^ a b c Tuomisto, H. (2010) A consistent terminology for quantifying species diversity? Yes, it does exist. Oecologia, 4, 853–860. doi: 10.1007/s00442-010-1812-0
  5. ^ a b Shannon, C. E. (1948) A mathematical theory of communication. The Bell System Technical Journal, 27, 379–423 and 623–656.
  6. ^ a b c Simpson, E. H. (1949) Measurement of diversity. Nature, 163, 688.
  7. ^ Herfindahl, O. C. (1950) Concentration in the U.S. Steel Industry. Unpublished doctoral dissertation, Columbia University.
  8. ^ Hirschman, A. O. (1945) National power and the structure of foreign trade. Berkeley.
  9. ^ Gibbs, J. P. & Martin, W. T. (1962) Urbanization, technology and the division of labor. American Sociological Review, 27, 667–677.
